Nature Human Behaviour
Springer Science and Business Media LLC
Preprints posted in the last 90 days, ranked by how well they match Nature Human Behaviour's content profile, based on 85 papers previously published here. The average preprint has a 0.15% match score for this journal, so anything above that is already an above-average fit.
Wang, W.; Kaufmann, T.; Dayan, P.
Inhibition is a core cognitive control function whose competence is distributed across the population, with more extreme impairments in psychiatric conditions such as attention deficit hyperactivity disorder (ADHD). The Stop Signal Task (SST) is a widely used paradigm for assessing this ability. However, conventional formalizations of SST performance, such as the independent race model, rely on assumptions that are frequently violated in modern experimental designs. Furthermore, the typical focus is on fitting mean reaction times, overlooking trial-by-trial dynamics. To address these limitations, we model the SST as a partially observable Markov decision process. This framework characterizes inhibitory control through distinct components: noisy perceptual inference regarding stimuli, and optimal control balanced against potential costs. To assess the ability of the model to capture the distribution of inhibitory capacities, we fit it to data from the large Adolescent Brain Cognitive Development (ABCD) study baseline cohort (N = 5,114). To do this, we adapted Simulation-Based Inference with a transformer-based encoder. This architecture learns compact, sequence-aware embeddings from raw behavioral data. These embeddings enable amortized inference of individual-level parameter posteriors in an efficient and reliable end-to-end manner, as confirmed by extensive validation. We identified distinct computational phenotypes associated with ADHD traits. Children with higher ADHD scores exhibited greater directional imprecision, a diminished intrinsic penalty for inhibition failures, and a more deterministic response style. Notably, the learned embedding space reveals a continuous manifold where children with higher ADHD scores are heterogeneously distributed, rather than forming distinct disorder clusters. This indicates that similar clinical traits can emerge from diverse combinations of computational mechanisms, supporting a dimensional perspective on neurodiversity. Our framework can be extended to a broader range of cognitive tasks, offering a scalable solution for fitting complex models to large-scale behavioral data.

Author summary: Inhibitory control is essential for adjusting thoughts and behavior and is often impaired in conditions like ADHD. Traditional models of the Stop Signal Task (SST) often oversimplify the complex decision-making involved. We formalized these cognitive processes using a more biologically grounded framework (POMDP). This approach separates perceptual processing from control adjustments and remains valid in diverse experimental designs where traditional models fail. To apply the model at scale, we developed a specialized machine learning approach (TeSBI). This allowed us to efficiently reverse-engineer individual cognitive profiles. Applying it to the ABCD dataset (which includes more than 5,000 children), we found that higher ADHD scores are linked to specific computational deficits: noisy sensory processing, a lack of concern for errors, and a deterministic response style. Crucially, children with higher ADHD scores did not form a single disorder cluster but displayed diverse cognitive combinations, supporting a dimensional view of neurodiversity. Our results show that our model effectively captures complex inhibition mechanisms. By combining theory-driven cognitive modeling with scalable data-driven inference, this framework enables the precise analysis of large-scale behavioral datasets.
This paves the way for more personalized approaches in computational psychiatry by recognizing the heterogeneity within clinical traits.
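As a rough illustration of the amortized-inference recipe this abstract describes (simulate trials, embed them with a sequence encoder, map the embedding to a parameter posterior), here is a minimal PyTorch sketch. The toy stop-signal simulator, the parameter names, and the Gaussian posterior head are illustrative assumptions, not the authors' TeSBI architecture.

```python
# Minimal, self-contained sketch of amortized simulation-based inference
# with a transformer encoder. The toy SST simulator and Gaussian posterior
# head are illustrative, not the authors' TeSBI implementation.
import torch
import torch.nn as nn

def simulate_sst(theta, n_trials=64):
    """Toy SST simulator: theta = (go_drift, stop_cost). Returns per-trial
    (reaction time, inhibited) pairs; purely illustrative dynamics."""
    go_drift, stop_cost = theta
    rt = 1.0 / go_drift.clamp(min=0.1) + 0.1 * torch.randn(n_trials)
    inhibited = (torch.rand(n_trials) < torch.sigmoid(-stop_cost)).float()
    return torch.stack([rt, inhibited], dim=-1)            # (n_trials, 2)

class TrialEncoder(nn.Module):
    """Sequence-aware embedding of raw trials -> Gaussian over parameters."""
    def __init__(self, d_model=32):
        super().__init__()
        self.proj = nn.Linear(2, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 4)                  # mean, log-std per parameter

    def forward(self, trials):                             # (batch, n_trials, 2)
        h = self.encoder(self.proj(trials)).mean(dim=1)    # pooled embedding
        return self.head(h).chunk(2, dim=-1)

enc = TrialEncoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
for step in range(200):                                    # amortized training
    theta = torch.rand(32, 2) * 2 + 0.5                    # draws from a toy prior
    data = torch.stack([simulate_sst(t) for t in theta])
    mu, log_std = enc(data)
    # Gaussian negative log-likelihood of the true parameters (up to a constant)
    nll = (((theta - mu) / log_std.exp()) ** 2 / 2 + log_std).sum(-1).mean()
    opt.zero_grad(); nll.backward(); opt.step()
```

Once trained this way, a single forward pass yields an approximate posterior for any new participant's trial sequence, which is what makes fitting cohorts of thousands tractable.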
Potter, H. G.
Generative artificial intelligence (genAI) tools are increasingly used by prospective higher education (HE) applicants seeking guidance on university and programme selection. Despite rapidly expanding use, little is known about how genAI systems may introduce or amplify bias in undergraduate admissions decision-making. Here, we systematically examined patterns of bias across three widely used genAI chatbots (ChatGPT, Copilot, Gemini) using neuroscience as a representative UK undergraduate programme. We constructed 216 prompts that varied by applicant characteristics (e.g. gender, study type, academic attainment). Each prompt was submitted to all three chatbots, generating 648 responses and 3240 individual programme recommendations. Output responses underwent text analysis (e.g. n-grams, gender-coded language), and national HE markers of esteem (REF21, TEF23, NSS24) were analysed. Applicant grades and priorities produced the strongest effects on genAI outputs. Higher-grade applicants and those prioritising research received significantly more masculine-coded language, independent of applicant gender. N-gram patterns also diverged: high-grade prompts more frequently elicited terms relating to excellence and research intensity, whereas lower-grade prompts produced greater emphasis on widening access. Recommendations were systematically skewed, with higher grades, private schooling, and research-focused priorities increasing the likelihood of recommending elite institutions and programmes with higher entry requirements. Critically, the gender-coded language of outputs predicted institutional characteristics: masculine-coded responses were associated with recommendations featuring higher entry thresholds and stronger research performance, while feminine-coded responses favoured institutions with higher student satisfaction. These findings reveal clear, systematic biases in how genAI guides prospective HE applicants. Such biases risk reinforcing existing educational and socioeconomic inequalities, underscoring the need for transparency, regulation, and oversight in the use of genAI within HE decision-making.

Highlights:
- GenAI is widely used by HE applicants despite little study of its biases.
- 216 prompts across 3 chatbots generated 3240 programme suggestions.
- Grades and priorities drove major shifts in language and recommendations.
- Gender-coded wording mapped onto research strength and entry standards.
- GenAI biases may reinforce inequalities in HE admissions decision-making.
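To make the text-analysis pipeline concrete, here is a hedged sketch of the two measurements the abstract names: a masculine/feminine-coded word score and bigram extraction. The short word lists stand in for a published gender-coded lexicon, and the example responses are fabricated.

```python
# Sketch of the analyses described above: a simple masculine/feminine-coded
# word score plus bigram extraction. The word lists are abbreviated,
# illustrative stand-ins for a published gender-coded lexicon.
import re
from sklearn.feature_extraction.text import CountVectorizer

MASCULINE = {"competitive", "leader", "ambitious", "independent", "analytical"}
FEMININE = {"supportive", "collaborative", "community", "nurturing", "together"}

def gender_code_score(text: str) -> float:
    """Positive = masculine-coded, negative = feminine-coded."""
    words = re.findall(r"[a-z']+", text.lower())
    m = sum(w in MASCULINE for w in words)
    f = sum(w in FEMININE for w in words)
    return (m - f) / max(m + f, 1)

responses = [
    "A competitive, research-intensive programme for ambitious applicants.",
    "A supportive community with collaborative teaching.",
]
print([round(gender_code_score(r), 2) for r in responses])  # [1.0, -1.0]

# Bigram frequencies across chatbot responses.
vec = CountVectorizer(ngram_range=(2, 2), stop_words="english")
counts = vec.fit_transform(responses)
print(dict(zip(vec.get_feature_names_out(), counts.sum(axis=0).A1)))
```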
Jeong, B.; Yoon, D.
The Balloon Analogue Risk Task (BART) is widely used to assess risk-taking and impulsivity, yet existing computational models struggle to unify sequential and prior evaluation strategies or fully capture uncertainty-driven information-seeking behavior. To address this, we introduce a novel computational framework grounded in the Active Inference Framework (AIF), which conceptualizes behavior as the minimization of expected free energy. Model comparisons demonstrate that AIF-based models statistically outperform existing benchmarks. Furthermore, we applied this framework to investigate impulsivity in women with Premenstrual Syndrome (PMS). Our model revealed that the PMS group exhibited significantly higher values in the inverse precision of policy (β0), and a phase difference in this parameter was observed only in the PMS group. This suggests that high β0 serves as a robust computational marker, reflecting both the trait impulsivity inherent in PMS and its state-like exacerbation across the menstrual cycle. Lastly, our findings indicate that impulsivity in PMS manifests not as a learning deficit, but as heightened sensitivity to trial-by-trial sequential evaluation at the expense of stable, pre-planned prior policies. This framework provides a neurobiologically plausible and mechanistically granular understanding of risk-taking, offering new avenues for computational psychiatry.
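For readers unfamiliar with the framework, the standard expected-free-energy formulation from the active inference literature is sketched below; the paper's exact parameterization may differ, but β0 conventionally appears as the rate (inverse precision) of a gamma prior on policy precision γ, so a higher β0 implies noisier, more sequentially driven policy selection.

```latex
% Standard expected-free-energy decomposition (risk + ambiguity) with a
% gamma prior on policy precision; the paper's parameterization may differ.
G(\pi) = \underbrace{D_{\mathrm{KL}}\big[\,Q(o \mid \pi) \,\|\, P(o)\,\big]}_{\text{risk}}
       + \underbrace{\mathbb{E}_{Q(s \mid \pi)}\big[\,\mathrm{H}[P(o \mid s)]\,\big]}_{\text{ambiguity}},
\qquad
Q(\pi) = \sigma\!\left(-\gamma\, G(\pi)\right), \quad \gamma \sim \Gamma(1, \beta_0).
```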
Omar, M.; Agbareia, R.; McGreevy, J.; Zebrowski, A.; Ramaswamy, A.; Gorin, M.; Anato, E. M.; Glicksberg, B. S.; Sakhuja, A.; Charney, A.; Klang, E.; Nadkarni, G.
Large language models are increasingly used for clinical guidance while their parent companies introduce advertising. We tested whether pharmaceutical ads embedded in the prompts of 12 models from OpenAI, Anthropic, and Google shift drug recommendations across 258,660 API calls and four experiments probing distinct epistemic conditions. When two drugs were both guideline-appropriate, advertising shifted selection of the advertised drug by +12.7 percentage points (P < 0.001), with some model-scenario pairs shifting from 0% to 100%. Google models were the most susceptible (+29.8 pp), followed by OpenAI (+10.9 pp), while Anthropic models showed minimal change (+2.0 pp). When the advertised product lacked evidence or was clinically suboptimal, models resisted. This reveals a structured vulnerability: advertising does not override medical knowledge but fills the space where clinical evidence is underdetermined. An open-response sub-analysis (2,340 calls across three representative models) confirmed that advertising restructures free-text clinical reasoning: models echoed ad claims at 2.7 times the baseline rate while maintaining high stated confidence and rarely disclosing the ad. Susceptibility was provider-dependent (Google: +29.8 pp; OpenAI: +10.9 pp; Anthropic: +2.0 pp). Because this bias operates within clinically correct answers, it is invisible to accuracy-based evaluation, identifying a class of AI safety vulnerability that standard testing cannot detect.
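The headline metric is a simple difference in selection rates between ad-embedded and control prompts, expressed in percentage points. A toy computation, with fabricated counts chosen only to reproduce the reported +12.7 pp magnitude:

```python
# Toy illustration of the headline metric: the percentage-point shift in
# how often the advertised drug is selected with vs. without the embedded
# ad. Counts here are fabricated for illustration.
def pp_shift(ad_picks: int, ad_total: int, ctrl_picks: int, ctrl_total: int) -> float:
    """Shift in selection rate of the advertised drug, in percentage points."""
    return 100 * (ad_picks / ad_total - ctrl_picks / ctrl_total)

# e.g. advertised drug chosen 412/1000 times with the ad vs 285/1000 without:
print(f"{pp_shift(412, 1000, 285, 1000):+.1f} pp")  # +12.7 pp
```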
Gardiner, A.; Mathes, G. H.; Cooper, R.; Kocakova, K.; Villafana, J. A.; Silvestro, D.; Pimiento, C.
We reconstructed neoselachian diversity over the past 145 million years using a new occurrence dataset and DeepDive (refs 1-3). We recovered a small decline through the K/Pg following a steady increase during the Cretaceous, and a prolonged, substantial decline towards the present following a mid-Eocene peak (ref. 2). Guinot et al. argue that our conclusions are compromised by problems in the underlying data and by the way extinction magnitude across the K/Pg was quantified. They cast doubt particularly on the pattern across the K/Pg, which they consider to be at odds with all previous analyses. They raise no issue with the Cretaceous trend, even though it was recovered with the same dataset and methods. We audited the alleged data issues reported in Guinot et al. and found that they mostly reflect operational choices (see Supplementary Information). However, we applied their data treatment and ran sensitivity tests to evaluate how this approach affects our results, specifically around the K/Pg. None of our tests recovered a diversity collapse for neoselachians during this interval. As such, we demonstrate that our findings are robust and consistent across different data treatments.
Huang, Q.; Doeller, C. F.
Human cognition is capacity-limited, requiring strategies to actively structure information. Eye movements offer a natural mechanism for sequential sampling, but whether such sequences organize mnemonic representations is unknown. We developed a working-memory task where color-frequency pairings created a consistent latent ordinal structure to optimally reduce memory load. Across two experiments, gaze patterns spontaneously aligned with this structure. Participants sampled items following this sequence during encoding and covertly replayed them during maintenance. Critically, the expression of this structure depended on cognitive demand. In a 3-item task, high performers showed robust sequential sampling during encoding, whereas lower performers compensated with replay-like revisitation during maintenance. Under higher demand (4 items), encoding-based organization was disrupted, and structured replay emerged primarily during maintenance to support memory. These findings show that eye movements do more than reflect memory; they actively organize it, revealing a flexible, behavioral analogue to neural replay when encoding resources are strained.
Maher, C.; Saez, I.; Radulescu, A.
In complex environments, available information does not uniquely define state, requiring attention learning to identify features relevant for learning and decision-making. As a result, human decisions often reflect reasoning that cannot be directly observed from choice. This dual opacity, at the level of agent and observer, poses a fundamental challenge for understanding naturalistic behavior. We inferred latent attention during learning and decision-making by training recurrent neural networks on synthetic data generated from two classes of attention learning models: feature-based reinforcement learning (FRL), in which attention emerges through retrospective value updating, and serial hypothesis testing (SHT), in which discrete hypotheses are prospectively sampled. A network trained on hybrid (FRL+SHT) synthetic data outperformed single-model networks, decoding latent human attention with more than 80% accuracy. This work provides a new approach for decoding latent attention and suggests a mechanism of attention learning wherein value-derived hypotheses are continuously tested against incoming evidence.
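As a sketch of the kind of generative model used to produce the synthetic training data, here is a minimal feature-based RL (FRL) learner in which attention weights over feature dimensions scale both valuation and credit assignment. The dimensions, softmax attention rule, and reward scheme are illustrative assumptions, not the authors' exact model.

```python
# Minimal feature-based RL (FRL) learner: attention over feature dimensions
# scales both valuation and credit assignment. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_dims, n_features = 3, 3                      # e.g. color / shape / texture
V = np.zeros((n_dims, n_features))             # learned feature values
eta = 0.3                                      # learning rate

def attention(V):
    """Softmax over the spread of values within each dimension."""
    spread = V.max(axis=1) - V.min(axis=1)
    w = np.exp(3.0 * spread)
    return w / w.sum()

for trial in range(100):
    stim = rng.integers(n_features, size=n_dims)   # one feature per dimension
    w = attention(V)
    value = sum(w[d] * V[d, stim[d]] for d in range(n_dims))
    reward = float(stim[0] == 1)                   # dimension 0 is secretly relevant
    delta = reward - value                         # prediction error
    for d in range(n_dims):                        # attention-weighted update
        V[d, stim[d]] += eta * w[d] * delta

print(np.round(attention(V), 2))  # attention concentrates on dimension 0
```

In the serial-hypothesis-testing (SHT) alternative the abstract contrasts with this, a single discrete hypothesis would be sampled and retained or rejected based on incoming evidence rather than graded value updating.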
Mishra, P.; Gandhi, T. K.; Gandhi, S. R.
Modern digital environments expose the brain to a dense stream of personally meaningful cues that differ markedly from the conditions under which sensory systems evolved. Understanding how the brain responds and adapts to such environments is increasingly relevant for cognitive and mental well-being. In this study, we used auditory event-related potentials (ERPs) recorded during a task-irrelevant auditory oddball paradigm to obtain a time-resolved account of neural responses to smartphone notifications. We dissociated the effects of generic notification sounds from the learning-related effects of personalized notification sounds, and examined how these responses varied across levels of smartphone use. At early stages of processing, generic tones exhibited experience-dependent strengthening of auditory representations, whereas personalized tones engaged stable, pre-learned representations, as reflected in P2 dynamics. The strongest stimulus-dependent modulation was observed in pre-attentive salience detection, reflected in the mismatch negativity (MMN), which was earlier and larger for personalized tones compared to generic ones. At later stages, attentional orienting and extended evaluative processing showed distinct patterns that were differentially sensitive to stimulus identity and smartphone use. Notably, smartphone usage levels modulated later attentional and evaluative stages (P3a and LPP) without altering early pre-attentive prediction-error signaling. Overall, our findings demonstrate that personalized smartphone notifications do not uniformly increase neural distraction. Instead, they engage robust and rapid pre-attentive prediction-error signaling that is largely independent of usage, while habitual smartphone use selectively shapes later attentional orienting and evaluative dynamics.

Significance statement: Modern digital environments are saturated with personalized cues, such as smartphone notifications, that are known to disrupt ongoing behavior. A common assumption in discussions of digital distraction is that repeated exposure to such cues progressively amplifies their sensory salience, rendering habitual users increasingly reactive. Here, we challenge this assumption using an ecologically grounded neural paradigm with personalized notification sounds. We show that personalized notifications engage robust early salience detection that is shared across users, while habitual smartphone use selectively modulates later attentional and evaluative processing. These findings place important constraints on how repeated exposure to personalized digital cues shapes neural processing, indicating that usage-related effects emerge downstream rather than at early sensory stages.
Conti, G.; Weber Costa, G.; D'Mello, D.; Yu, Y.
Health visiting is England's universal home visiting programme for families with children under five and a key pillar of early intervention policy. Since the 2015 devolution of commissioning to Local Authorities (LAs), the service has faced sustained financial and workforce pressures, yet there is limited systematic evidence on whether resources and delivery have evolved differentially across areas and along the deprivation gradient. Using new Freedom of Information (FOI) data, we estimate how health visiting inputs (spending and workforce) and mandated contact delivery vary in levels and trajectories by baseline deprivation. FOI requests covered 147 English LAs (four pairs submitted joint returns), providing annual 2016-2021 Full-Time Equivalent (FTE) data on Health Visitors (HVs) and Clinical Skill Mix Staff (CSMS), which we link to DHSC Health Visitor Service Delivery Metrics reporting completion of the five mandated 0-5 reviews (New Birth Visits, 6-8 week reviews, 12-month reviews, 2-2.5 year reviews, and 2-2.5 year reviews completed with ASQ-3) and to LA revenue outturn expenditure on mandated and non-mandated 0-5 public health services (real-terms total and per child under five). Between 2016 and 2021, HV FTE fell by around one-fifth while CSMS expanded by roughly one-third, consistent with an overall contraction and a shift toward lower-band staff. To test whether these changes map onto underlying disadvantage, we stratify LAs into tertiles of baseline deprivation using the 2015 Income Deprivation Affecting Children Index (IDACI) and implement a three-part empirical strategy: (i) plotting tertile means over time, (ii) testing within-year cross-sectional differences using parametric and non-parametric methods with pairwise comparisons, and (iii) estimating LA fixed-effects regressions with Year × IDACI interactions under both a flexible year-by-year specification and a parsimonious linear-trend specification to assess differential trajectories. We find persistent cross-sectional gradients in per-child spending that are broadly progressive (more deprived LAs spend more per child on both mandated and non-mandated 0-5 services), while fixed-effects models show little evidence that spending trajectories differ systematically by deprivation. Workforce trends are more uneven: HV FTE declines more slowly and CSMS FTE grows more slowly in more deprived LAs in the linear-trend specification, while per-child HV trajectories show no differential trends. Despite these input differences, completion of mandated contacts is relatively stable across the deprivation gradient; the only consistent differential trend is faster improvement in the 6-8 week review in more deprived areas. Meanwhile, caseload pressure rises, increasing most sharply in the most deprived LAs in the pre-pandemic years, suggesting that completion-based performance measures may mask heterogeneities in service capacity and intensity. Finally, we quantify the resources required to restore recommended caseloads, implying the need for approximately 3,100 additional FTE staff and around 120 million GBP annually (plus training costs).
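A hedged sketch of the parsimonious linear-trend specification described above, run on synthetic data: Local Authority fixed effects plus a Year × IDACI-tertile interaction, with standard errors clustered by LA. All variable names and the data-generating process are illustrative.

```python
# Sketch of an LA fixed-effects regression with a linear Year x IDACI-tertile
# interaction, on synthetic data. Variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = [(la, year) for la in range(40) for year in range(2016, 2022)]
df = pd.DataFrame(rows, columns=["la", "year"])
df["idaci_tertile"] = df["la"] % 3            # 0 = least deprived (toy assignment)
df["trend"] = df["year"] - 2016
df["hv_fte_per_child"] = (
    5 - 0.2 * df["trend"] + 0.05 * df["trend"] * df["idaci_tertile"]
    + rng.normal(0, 0.3, len(df))
)

# Tertile main effects are absorbed by the LA dummies, so only the trend
# and its interaction with tertile are estimated.
fit = smf.ols(
    "hv_fte_per_child ~ C(la) + trend + trend:C(idaci_tertile)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["la"]})
print(fit.params.filter(like="trend"))
```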
Okada, H.; Seno, S.; Chung, U.-i.; Ohkura, N.
Recent large-scale bibliometric analyses suggest that individual AI use can increase productivity while reducing downstream engagement and topic diversity. Here we ask whether collective AI deployment is associated with shared engagement. Using an Audience Response Engagement (ARE) system at NGS Expo 2025 (N=110 biomedical researchers), we captured real-time consensus and generated updated visualizations within minutes. Our data reveal a substantial gap between adoption and transparency: 93.6% of researchers use AI at least weekly, yet only 5.5% consistently disclose this usage, a 17-fold disparity. This pattern is consistent with systemic policy uncertainty (39.1% report unclear guidelines). Behavioral clustering identifies a "High-Concern" group (31%) as a candidate for targeted interventions: highest productivity yet lowest disclosure. These findings suggest that collective AI deployment in physical settings is associated with shared engagement.
Wiederhold, B.; Stemmler, M. B.; Herz, A.
While our senses transmit information at rates exceeding 10^6 bit/s, high-level cognitive processing is thought to be much slower, on the order of 10 bit/s regardless of the task (ref. 1). It is unclear, though, whether this limit holds when the human mind is challenged. To test how fast one can process abstract information, we analyzed mental calculations, using data from international competitions and record-setting attempts. We discovered scaling laws relating task complexity and completion speed. High information rates were sustained for minutes, peaking above 215 bit/s for the shortest calculations, split into 125 bit/s for conscious perception and 90 bit/s for algorithmic execution. These rates are well in excess of previous estimates for high-level mental abilities.
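The underlying quantity is an information rate: the entropy of the processed material divided by the time taken. A generic form with an illustrative worked example (the numbers below are not taken from the paper):

```latex
% Generic definition behind such estimates; the worked numbers are
% illustrative, not taken from the paper.
R = \frac{H}{T}, \qquad H \approx N \log_2 10 \ \text{bits for an $N$-digit string},
\qquad \text{e.g. } N = 20,\ T = 0.31\,\mathrm{s} \ \Rightarrow\ R \approx \frac{20 \times 3.32}{0.31} \approx 214\ \mathrm{bit/s}.
```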
Hong, X.; Hutchins, B. I.; Ni, C.
The preprint ecosystem has expanded rapidly over the past decade, fundamentally altering science communication. Yet, the scholarly community's attitudes toward this shift remain underexplored. Through a large-scale survey of US and Canadian biomedical scholars, we provide a comprehensive analysis of preprint utilization, perceived impact, and integration into academic credit systems. We find robust engagement across reading, citing, and submitting preprints; however, this activity is driven primarily by a desire for rapid dissemination rather than a foundational commitment to open science. Furthermore, while preprints are valued as networking assets, perceived career penalties during formal academic evaluations stifle broader cultural adoption. Crucially, to navigate the absence of formal peer review, scholars report a heavy reliance on author reputation as a primary heuristic to evaluate a preprint's credibility and guide their reading and citation decisions. Notably, despite acknowledging preprints' role in accelerating knowledge sharing, scholars express significant concerns regarding fraud and misinformation, particularly amid declining public trust in science and emerging threats to scientific integrity from artificial intelligence. To resolve these tensions, the preprint ecosystem must evolve beyond prioritizing speed to foster genuine academic dialogue. Simultaneously, evaluation frameworks must adapt to the realities of preprinting, and innovative quality-control mechanisms are urgently needed to balance rapid dissemination with rigorous scientific integrity.
Lin, Y.; Plomin, R.
The most highly predictive polygenic scores in the behavioural sciences are for cognitive traits, especially general cognitive ability (g) and educational attainment. We combined polygenic scores derived from genome-wide association studies of adult g and educational attainment to create adult 'polygenic g scores', which we used to chart the course of cognitive development of 10,000 white British children from toddlerhood through early adulthood. We integrated cross-sectional regression, latent growth curve, and confirmatory factor analysis to systematically characterise cognitive development. The polygenic g score showed minimal prediction in toddlerhood, modest prediction in childhood, and substantial prediction by early adulthood, accounting for 12% of the variance. Higher polygenic g scores were associated with faster cognitive growth in latent growth models. Prediction was strongest for a cross-time latent cognitive factor (15%) capturing cognitive ability across development. By integrating polygenic prediction directly into a structural equation model framework, we provided a theoretical upper bound of genetic influences on g under minimal measurement error. We also examined the polygenic g score's prediction of educational achievement, behaviour problems, and anthropometric outcomes and found similar developmental increases in prediction for educational achievement. Together, our findings demonstrate that adult polygenic g scores can be a useful tool for charting the development of cognitive traits.
Arisido, M. W.; Borges, M. C.; Giambartolomei, C.; McBride, N.; Joaquim Hofmeister, R.; Kutalik, Z.; Magnus, M. C.; Zuccolo, L.
Despite well-established benefits to mothers and children, breastfeeding rates fall short of WHO recommendations worldwide. To inform effective support strategies, we investigated how maternal factors influence breastfeeding success. We estimated the causal effects of sociodemographic, cardiometabolic, psychiatric, and perinatal factors on breastfeeding initiation, duration, and exclusivity, by triangulating Mendelian randomization and multivariable regression analyses using data from 72,653 mothers and 317,651 offspring across four European cohorts. Triangulated results robustly demonstrated that higher education, lower BMI, and lower propensities for smoking, insomnia, and depression improved breastfeeding success. Each additional 3.4 years in education increased initiation odds 2.32-fold (95% CI: 1.94-2.77) and prolonged exclusive breastfeeding (β = 0.21 standard deviations, 95% CI: 0.17-0.24). Smoking, depression, and BMI mediated 26%, 14%, and 12% of the education effect on exclusivity, respectively. We found little evidence for effects of blood pressure, cholesterol, and perinatal factors. We provide new robust evidence that maternal cardiometabolic and psychiatric factors partially mediate the causal effect of maternal education on breastfeeding. Interventions targeting maternal health could support breastfeeding, reducing maternal and infant health disparities.
Nagar, N.; Vasilchenko, K.; Jangraw, D. C.; Rutledge, R. B.; Keren, H.
Mood shapes behavior, physiology, and well-being. While mood continuously fluctuates in response to external events, it can also remain surprisingly stable, raising a fundamental question: what shapes emotional well-being, mood stability or variability? Quantifying mood along these two dimensions is challenging. We combined a closed-loop mood-modulating task with computational modeling to derive quantitative markers of emotional stability and variability and map them onto the continuum from well-being to depressive symptoms. Participants (n = 209) experienced adaptive reward-based mood modulation, and modeling revealed that higher well-being was associated with mood variability and stronger weighting of recent events, whereas depressive symptoms were associated with greater mood stability and stronger anchoring to earlier events. Results replicated in an independent test-retest dataset, which also showed that these mood-updating parameters are reliable across days. Our findings identify adaptive responsiveness to recent experiences as a quantitative marker of well-being and unravel the temporal mood dynamics in health and depression.
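A generic recency-weighted mood-update rule of the kind such computational models use is sketched below; the symbols and exact parameterization are illustrative, not the paper's model. Unrolling the update shows why a single learning-rate parameter captures the stability-variability continuum.

```latex
% Generic recency-weighted mood update; symbols are illustrative.
M_t = M_{t-1} + \alpha \,(E_t - M_{t-1})
\;\;\Longleftrightarrow\;\;
M_t = (1-\alpha)^t M_0 + \alpha \sum_{j=1}^{t} (1-\alpha)^{\,t-j} E_j .
```

In this form, a larger α weights recent events more heavily (mood variability, linked here to well-being), while a smaller α anchors mood to earlier events (stability, linked here to depressive symptoms).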
Ging-Jehli, N.; Childers, R. K.
Significance Statement: Adaptive behavior depends on knowing when to persist and when to let go, even when letting go appears as avoidance. While classical accounts of avoidance emphasize reward-effort trade-offs, we show that these decisions are critically guided by meta-control and inferences about outcome controllability and agency. Using a novel paradigm, we dissociate drivers of avoidance and demonstrate that threat does not uniformly promote disengagement. When outcome control is preserved, threat instead increases persistence, particularly following experiences that build agency in failure-safe contexts. We formalize these dynamics in the Meta-Arbitration of Control and Agency Q-learning (MACA-Q) model, which captures how experience-dependent beliefs about agency guide learning and choice across contexts. Our results show that similar avoidance behaviors can arise from distinct computational pathways. This shifts the focus from global avoidance biases to the dynamic regulation of agency as a core principle of adaptive behavior, with implications for neuroscience, psychiatry, and adaptive artificial intelligence.

Adaptive behavior requires deciding when to persist and when to disengage under uncertainty and partial outcome control. Avoidance has often been studied as a response to threat or cost, yet existing paradigms cannot disentangle whether disengagement reflects threat sensitivity, expected failure, or reduced perceived control. We introduce a persistence-escape paradigm that independently manipulates incentive structures, effort demands, and outcome controllability. In a large online sample (N = 457), we show that avoidance is context-dependent rather than a stable, global trait. When outcome control was preserved under threat, the typical avoidance response reversed, promoting persistence rather than withdrawal. At the individual level, high-performing individuals were not uniformly more persistent, but more selective, disengaging when control was low. Moreover, higher anxiety symptoms were linked to cost-dominant evaluation and reduced use of accumulated competence. Conversely, higher depressive symptoms were linked to diminished sensitivity to effort and higher expected failure. To explain these behavioral patterns, we developed the Meta-Arbitration of Control and Agency Q-learning (MACA-Q) model, which embeds value learning and affective evaluation within a meta-control architecture. Critically, we formalize agency as a dynamically inferred learning gate, distinct from self-efficacy, that determines whether outcomes are treated as informative based on controllability and feedback reliability. The model explains context-specific avoidance and reveals that similar behaviors can arise from distinct computational pathways. It further shows how experience in failure-safe contexts guides subsequent behavior in adverse contexts. Our findings show that avoidance is guided by the dynamic regulation of engagement based on inferred controllability and competence. By combining a novel paradigm with a computational model, we provide a formal account of agency and a unifying framework in which meta-control regulates adaptive and maladaptive engagement across contexts, with implications for neuroscience, psychiatry, and adaptive artificial intelligence.
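To illustrate the core idea of agency as a learning gate, here is a minimal sketch in which an agency belief, updated from controllability and feedback reliability, scales the Q-learning prediction-error update. This is an illustrative toy, not the MACA-Q model itself.

```python
# Toy agency-gated Q-update: inferred agency scales how informative an
# outcome is treated as being. Illustrative only, not MACA-Q.
alpha, kappa = 0.2, 0.1   # value and agency learning rates

def update(Q, agency, reward, controllable, feedback_reliable):
    # Agency belief drifts toward evidence that outcomes are controllable
    # and feedback is reliable.
    evidence = float(controllable and feedback_reliable)
    agency += kappa * (evidence - agency)
    # Agency gates the prediction-error update: low agency means the
    # outcome is treated as uninformative, so little is learned from it.
    Q += alpha * agency * (reward - Q)
    return Q, agency

Q, agency = 0.0, 0.5
for t in range(50):       # a failure-safe, controllable context
    Q, agency = update(Q, agency, reward=1.0, controllable=True,
                       feedback_reliable=True)
print(round(Q, 2), round(agency, 2))   # both approach 1
```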
Sugawara, M.; Mano, Y.; Aoki, Y.; Nakaya, K.; Matsuda, Y.; Toyama, A.; Suzuki, S.
Subjective valuation of food rewards guides our dietary choices and is fundamental to human health and well-being. An extensive literature of human functional magnetic resonance imaging (fMRI) studies has consistently shown that a network of reward-processing brain regions, including the ventromedial prefrontal cortex (vmPFC) and ventral striatum, encodes the subjective values of food rewards. However, the representational geometry of value signals and the mechanisms by which they are constructed in the brain remain poorly understood. This is partly because most fMRI studies on food valuation rely on small stimulus sets, yielding datasets too shallow for advanced analyses such as multi-voxel pattern analysis and deep neural network modeling. Here, we present a densely sampled fMRI dataset wherein 31 participants provided subjective value ratings for over 500 food images across three separate days. We validate the dataset by replicating the well-established findings regarding the neural encoding of subjective value in the vmPFC and ventral striatum. We anticipate this resource will facilitate diverse studies on neural food valuation using advanced analytical methods.
Platonova, O.; Dogonasheva, O.; Giraud, A.-L.; Bouton, S.
Speech comprehension draws on both temporal structure and contextual prediction, yet how these mechanisms coordinate is poorly understood. Time-compressed speech provides a controlled probe: by degrading temporal structure, it reveals the architecture of ordinary speech comprehension. Using 3× compression with silence insertion, we varied delivery rate, temporal regularity, and boundary alignment (syllabic vs. time-defined) across two behavioural experiments. Comprehension peaked near the upper theta boundary and declined at slower and faster rates. Temporal regularity helped only when boundaries coincided with syllabic onsets, while periodic pacing alone was insufficient. Contextual predictability (word-level entropy) facilitated comprehension when temporal cues were least effective, but only under syllabic segmentation. Computational modeling confirmed that β-mediated contextual prediction selectively benefited syllabic-aligned conditions, was detrimental under time-based segmentation, and better reproduced the human pattern overall. Together, these results suggest that contextual prediction is continuously active but behaviorally visible only when temporal scaffolding is insufficient and syllabic structure is preserved.
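Word-level entropy, the predictability measure used here, is the Shannon entropy of the next-word distribution given the context. A minimal illustration with fabricated distributions:

```python
# Word-level entropy as a predictability measure: low entropy over the
# next-word distribution means a highly predictable continuation.
# The toy distributions are fabricated.
import math

def entropy(p):
    """Shannon entropy in bits of a next-word probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

predictable = [0.9, 0.05, 0.05]        # context strongly constrains the word
unpredictable = [1 / 3] * 3            # context provides little constraint
print(round(entropy(predictable), 2), round(entropy(unpredictable), 2))
# -> 0.57 1.58
```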
Higashi, H.
Extracting stable individual traits from behavior observed across diverse contexts is a central challenge in behavioral modeling. We propose a framework for inferring domain-invariant individual latent representations by jointly encoding behaviors across multiple domains. Using large-scale telemetry data from professional Counter-Strike 2 gameplay, we demonstrate that these representations are stable across distinct environments and roles, improving behavior prediction in novel domains. Our analysis reveals that complex idiosyncratic movement policies can be effectively compressed into low-dimensional embeddings, with as few as two dimensions capturing the majority of individual strategic variation. Crucially, the learned latent space forms a structured metric space where Euclidean distances predict the degradation of transfer performance. Furthermore, we show that the latent axes align with interpretable behavioral phenotypes, such as risk-taking and social cohesion. These findings suggest that multi-domain integration is a robust method for uncovering the functional structure of latent individuality in complex decision-making tasks, bridging the gap between high-dimensional telemetry data and meaningful psychological constructs.
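The metric-space claim can be checked directly: if the latent space is well structured, the Euclidean distance between two players' embeddings should correlate with how much transfer performance degrades between them. A sketch on synthetic embeddings (the degradation scores are fabricated to illustrate the analysis, not the finding):

```python
# Checking whether latent distance predicts transfer degradation, on
# synthetic data. Embeddings and degradation scores are fabricated.
import numpy as np

rng = np.random.default_rng(2)
n_players, d = 50, 2                       # two dims capture most variation
Z = rng.normal(size=(n_players, d))        # player embeddings

pairs = [(i, j) for i in range(n_players) for j in range(i + 1, n_players)]
dist = np.array([np.linalg.norm(Z[i] - Z[j]) for i, j in pairs])
# Fabricated transfer degradation that grows with latent distance plus noise.
degradation = 0.1 * dist + rng.normal(0, 0.05, len(pairs))

r = np.corrcoef(dist, degradation)[0, 1]
print(f"distance vs. transfer degradation: r = {r:.2f}")
```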
Khatun, M.; Patel, N.; Loid, M.; Destouni, A.; Lingasamy, P.; S, S. L.; Peters, M.; Sharma, R.; Salumets, A.; Modhukur, V.
Infertility generates profound psychological and social distress for both women and men, yet mens communicative experiences remain comparatively underexamined. Male infertility (MI) is often shaped by stigma, norms of masculinity, and limited opportunities for emotional disclosure, constraining help-seeking in offline settings. This study investigates how men use anonymous online peer-support spaces to discuss MI by analyzing discussions from the r/maleinfertility subreddit on Reddit. Using natural language processing techniques, we examined 10,769 posts and 80,381 comments published between 2013 and 2025. Analyses assessed sentiment and emotional expression, topic structure, hyperlink networks, and discussions related to diagnostic testing, treatment decision-making, and donor sperm use. Topic modeling revealed a functional differentiation between posts and comments. Original posts primarily focused on clinical sense-making, including interpretation of semen analyses, hormonal testing, and assisted reproduction options. In contrast, comments emphasized emotional validation, experiential knowledge-sharing, and normalization of alternative family-building pathways. Emotional expression varied by discussion topic, with heightened fear and sadness in conversations involving genetic testing, surgical sperm retrieval, and donor sperm. Hyperlink analysis indicated frequent engagement with peer-reviewed medical information, reflecting active evidence-seeking alongside peer exchange. Taken together, findings suggest that anonymous online communities function as critical infrastructures of support for men experiencing infertility, enabling forms of disclosure and vulnerability often constrained in offline contexts. These spaces facilitate interpretation of medical information, collective coping, and decision-making regarding treatment and donor options. The study highlights the role of digital anonymity in mitigating stigma and expanding communicative possibilities for men navigating infertility alongside clinical care.
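A hedged sketch of the topic-modeling step on toy posts, using scikit-learn's LDA; the real analysis covered 10,769 posts and 80,381 comments, and the example documents below are fabricated.

```python
# Sketch of the topic-modeling step described above, on fabricated posts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "semen analysis results low motility morphology numbers",
    "hormone panel fsh testosterone levels bloodwork",
    "considering donor sperm after failed icsi cycles",
    "urologist recommended varicocele surgery before ivf",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]  # top terms per topic
    print(f"topic {k}: {', '.join(top)}")
```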